Results 1 - 20 of 31
1.
Entropy (Basel) ; 26(1)2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38248207

ABSTRACT

Slope Entropy (SlpEn) is a novel method recently proposed in the field of time series entropy estimation. In addition to the well-known embedded dimension parameter, m, used in other methods, it applies two additional thresholds, denoted as δ and γ, to derive a symbolic representation of a data subsequence. The original paper introducing SlpEn provided some guidelines for recommended specific values of these two parameters, which have been successfully followed in subsequent studies. However, a deeper understanding of the role of these thresholds is necessary to explore the potential for further SlpEn optimisations. Some works have already addressed the role of δ, but in this paper, we extend this investigation to include the role of γ and explore the impact of using an asymmetric scheme to select threshold values. We conduct a comparative analysis between the standard SlpEn method as initially proposed and an optimised version obtained through a grid search to maximise signal classification performance based on SlpEn. The results confirm that the optimised version achieves higher time series classification accuracy, albeit at the cost of significantly increased computational complexity.
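The symbolisation step described above can be illustrated with a minimal Python sketch. The function name, defaults, and five-symbol mapping follow the published description (consecutive differences thresholded by γ and δ) as best understood; this is an illustrative sketch, not the authors' reference implementation.

```python
from collections import Counter
from math import log

def slope_entropy(x, m=3, gamma=1.0, delta=1e-3):
    """Sketch of Slope Entropy: map each consecutive difference to one
    of five symbols using thresholds gamma and delta, then compute the
    Shannon entropy of the (m-1)-symbol pattern frequencies."""
    def symbol(d):
        if d > gamma:    return 2    # steep positive slope
        if d > delta:    return 1    # gentle positive slope
        if d >= -delta:  return 0    # ties region around 0
        if d >= -gamma:  return -1   # gentle negative slope
        return -2                    # steep negative slope
    symbols = [symbol(x[i + 1] - x[i]) for i in range(len(x) - 1)]
    patterns = Counter(tuple(symbols[i:i + m - 1])
                       for i in range(len(symbols) - m + 2))
    total = sum(patterns.values())
    return -sum((c / total) * log(c / total) for c in patterns.values())
```

In this reading, δ absorbs near-ties around 0 while γ separates gentle from steep slopes, which is why an asymmetric choice of the two thresholds can alter the symbol distribution.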

2.
Entropy (Basel) ; 24(4)2022 Apr 05.
Article in English | MEDLINE | ID: mdl-35455174

ABSTRACT

Body temperature is usually employed in clinical practice by strict binary thresholding, aiming to classify patients as having fever or not. In recent years, other approaches based on the continuous analysis of body temperature time series have emerged. These are not only based on absolute thresholds but also on patterns and temporal dynamics of these time series, thus providing promising tools for early diagnosis. The present study applies three time series entropy calculation methods (Slope Entropy, Approximate Entropy, and Sample Entropy) to body temperature records of patients with bacterial infections and other causes of fever in search of possible differences that could be exploited for automatic classification. In the comparative analysis, Slope Entropy proved to be a stable and robust method that could bring higher sensitivity to the realm of entropy tools applied in this context of clinical thermometry. This method was able to find statistically significant differences between the two classes analyzed in all experiments, with sensitivity and specificity above 70% in most cases.

3.
Entropy (Basel) ; 24(10)2022 Oct 12.
Article in English | MEDLINE | ID: mdl-37420476

ABSTRACT

Many time series entropy calculation methods have been proposed in the last few years. They are mainly used as numerical features for signal classification in any scientific field where data series are involved. We recently proposed a new method, Slope Entropy (SlpEn), based on the relative frequency of differences between consecutive samples of a time series, thresholded using two input parameters, γ and δ. In principle, δ was proposed to account for differences in the vicinity of the 0 region (namely, ties) and, therefore, was usually set at small values such as 0.001. However, there is no study that really quantifies the role of this parameter using this default or other configurations, despite the good SlpEn results so far. The present paper addresses this issue, removing δ from the SlpEn calculation to assess its real influence on classification performance, or optimising its value by means of a grid search in order to find out if other values beyond the 0.001 value provide significant time series classification accuracy gains. Although the inclusion of this parameter does improve classification accuracy according to experimental results, gains of 5% at most probably do not support the additional effort required. Therefore, SlpEn simplification could be seen as a real alternative.

4.
Entropy (Basel) ; 25(1)2022 Dec 30.
Article in English | MEDLINE | ID: mdl-36673207

ABSTRACT

Slope Entropy (SlpEn) is a very recently proposed entropy calculation method. It is based on the differences between consecutive values in a time series and two new input thresholds to assign a symbol to each resulting difference interval. As the histogram normalisation value, SlpEn uses the actual number of unique patterns found instead of the theoretically expected value. This maximises the information captured by the method but, as a consequence, SlpEn results do not usually fall within the classical [0,1] interval. Although this interval is not necessary at all for time series classification purposes, it is a convenient and common reference framework when entropy analyses take place. This paper describes a method to keep SlpEn results within this interval, and improves the interpretability and comparability of this measure in a similar way as for other methods. It is based on a max-min normalisation scheme, described in two steps. First, an analytic normalisation is proposed using known but very conservative bounds. Afterwards, these bounds are refined using heuristics about the behaviour of the number of patterns found in deterministic and random time series. The results confirm the suitability of the approach proposed, using a mixture of the two methods.

5.
Entropy (Basel) ; 22(5)2020 Apr 25.
Article in English | MEDLINE | ID: mdl-33286267

ABSTRACT

Despite its widely tested and proven usefulness, there is still room for improvement in the basic permutation entropy (PE) algorithm, as several subsequent studies have demonstrated in recent years. Some of these new methods try to address the well-known PE weaknesses, such as its focus only on ordinal and not on amplitude information, and the possible detrimental impact of equal values found in subsequences. Other new methods address less specific weaknesses, such as the PE results' dependence on input parameter values, a common problem found in many entropy calculation methods. The lack of discriminating power among classes in some cases is also a generic problem when entropy measures are used for data series classification. This last problem is the one specifically addressed in the present study. Toward that purpose, the classification performance of the standard PE method was first assessed by conducting several time series classification tests over a varied and diverse set of data. Then, this performance was reassessed using a new Shannon Entropy normalisation scheme proposed in this paper: divide the relative frequencies in PE by the number of different ordinal patterns actually found in the time series, instead of by the theoretically expected number. According to the classification accuracy obtained, this last approach exhibited a higher class discriminating power. It was capable of finding significant differences in six out of seven experimental datasets (whereas the standard PE method only did so in four), and it also had better classification accuracy. It can be concluded that, using the additional information provided by the number of forbidden/found patterns, it is possible to achieve a higher discriminating power than with the classical PE normalisation method. The resulting algorithm is also very similar to that of PE and very easy to implement.
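Under one plausible reading of the scheme above, the Shannon entropy of the ordinal-pattern frequencies is divided by the log of the number of patterns actually found, rather than by the theoretical log(m!). The following sketch contrasts the two denominators; names and defaults are illustrative, not the paper's exact algorithm.

```python
from collections import Counter
from math import log, factorial

def pe_normalised(x, m=3, use_found=True):
    """Sketch of PE with the alternative normalisation: divide the
    Shannon entropy of ordinal-pattern frequencies by log of the number
    of patterns actually found (use_found=True) instead of the
    theoretical log(m!) (use_found=False)."""
    n = len(x) - m + 1
    counts = Counter(tuple(sorted(range(m), key=lambda k: x[i + k]))
                     for i in range(n))
    h = -sum((c / n) * log(c / n) for c in counts.values())
    denom = log(len(counts)) if use_found else log(factorial(m))
    return h / denom if denom > 0 else 0.0
```

Since the number of found patterns never exceeds m!, the found-patterns denominator yields a value at least as large as the classical one, which is how the forbidden-pattern information enters the measure.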

6.
Entropy (Basel) ; 22(9)2020 Sep 15.
Article in English | MEDLINE | ID: mdl-33286803

ABSTRACT

Fever is a readily measurable physiological response that has been used in medicine for centuries. However, the information provided has been greatly limited by a plain thresholding approach, overlooking the additional information provided by temporal variations and by temperature values below that threshold, which are also representative of the subject's status. In this paper, we propose to utilize continuous body temperature time series of patients who developed a fever, in order to apply a method capable of diagnosing the specific underlying fever cause only by means of a pattern relative frequency analysis. This analysis was based on a recently proposed measure, Slope Entropy, applied to a variety of records coming from dengue and malaria patients, among other fever diseases. After an input parameter customization, a classification analysis of malaria and dengue records took place, quantified by the Matthews Correlation Coefficient. This classification yielded a high accuracy, with more than 90% of the records correctly labelled in some cases, demonstrating the feasibility of the approach proposed. This approach, after further studies, or combined with other measures such as Sample Entropy, is certainly very promising as an early diagnosis tool based solely on body temperature temporal patterns, which is of great interest in the current COVID-19 pandemic scenario.

7.
Entropy (Basel) ; 22(11)2020 Nov 01.
Article in English | MEDLINE | ID: mdl-33287011

ABSTRACT

Bipolar Disorder (BD) is an illness with high prevalence and a huge social and economic impact. It is recurrent, with a long-term evolution in most cases. Early treatment and continuous monitoring have proven to be very effective in mitigating the causes and consequences of BD. However, no tools are currently available for massive, semi-automatic BD patient monitoring and control. Taking advantage of recent technological developments in the field of wearables, this paper studies the feasibility of BD episode classification using entropy measures, an approach successfully applied in a myriad of other physiological frameworks. This is a very difficult task, since actigraphy records are highly non-stationary and corrupted by artifacts (no activity). The method devised uses a preprocessing stage to extract epochs of activity, and then applies a recently proposed quantification measure, Slope Entropy, which outperforms the most common entropy measures used in biomedical time series. The results confirm the feasibility of the approach proposed, since the three states involved in BD (depression, mania, and remission) can be significantly distinguished.

8.
Med Biol Eng Comput ; 58(4): 785-804, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32002753

ABSTRACT

Continuous monitoring of breathing frequency (fB) could foster early prediction of adverse clinical effects and exacerbation of medical conditions. Current solutions are invasive or obtrusive and thus not suitable for prolonged monitoring outside the clinical setting. Previous studies demonstrated the feasibility of deriving fB by measuring inclination changes due to breathing using accelerometers or inertial measurement units (IMUs). Nevertheless, few studies have faced the problem of motion artifacts that limit the use of IMU-based systems for continuous monitoring. Moreover, few attempts have been made to move towards real portability and wearability of such devices. This paper proposes a wearable IMU-based device that communicates via Bluetooth with a smartphone, uploading data to a web server to allow remote monitoring. Two IMU units are placed on the thorax and abdomen to record breathing-related movements, while a third IMU unit records body/trunk motion and is used as a reference. The proposed system was evaluated in terms of long-acquisition reliability, showing good performance in terms of duration and amount of data loss. The device was preliminarily tested for accuracy in measuring breathing temporal parameters, in static conditions, during postural changes, and during light indoor activities, comparing favourably against the reference methods (mean breathing frequency error < 5%). Graphical abstract: Proof of concept of a wearable, wireless, modular respiratory Holter based on inertial measurement units (IMUs) for continuous breathing pattern monitoring through the detection of chest-wall breathing-related movements.


Subject(s)
Physiological Monitoring/instrumentation, Physiological Monitoring/methods, Respiration, Wearable Electronic Devices, Adult, Algorithms, Computers, Equipment Design, Exercise, Female, Healthy Volunteers, Humans, Male, Mobile Applications, Posture, Reproducibility of Results, Computer-Assisted Signal Processing, Torso
9.
Diabetes Metab Res Rev ; 36(4): e3287, 2020 05.
Article in English | MEDLINE | ID: mdl-31916665

ABSTRACT

BACKGROUND: The endoscopically implanted duodenal-jejunal bypass liner (DJBL) is an attractive alternative to bariatric surgery for obese diabetic patients. This article aims to study dynamical aspects of the glycaemic profile that may influence DJBL effects. METHODS: Thirty patients underwent DJBL implantation and were followed for 10 months. Continuous glucose monitoring (CGM) was performed before implantation and at month 10. Dynamical variables from CGM were measured: coefficient of variation of glycaemia, mean amplitude of glycaemic excursions (MAGE), detrended fluctuation analysis (DFA), % of time with glycaemia under 6.1 mmol/L (TU6.1), area over 7.8 mmol/L (AO7.8) and time in range. We analysed the correlation between changes in both anthropometric (body mass index, BMI and waist circumference) and metabolic (fasting blood glucose, FBG and HbA1c) variables and dynamical CGM-derived metrics and searched for variables in the basal CGM that could predict successful outcomes. RESULTS: There was a poor correlation between anthropometric and metabolic outcomes. There was a strong correlation between anthropometric changes and changes in glycaemic tonic control (∆BMI-∆TU6.1: rho = -0.67, P < .01) and between metabolic outcomes and glycaemic phasic control (∆FBG-∆AO7.8: r = .60, P < .01). Basal AO7.8 was a powerful predictor of successful metabolic outcome (0.85 in patients with AO7.8 above the median vs 0.31 in patients with AO7.8 below the median: Chi-squared = 5.67, P = .02). CONCLUSIONS: In our population, anthropometric outcomes of DJBL correlate with improvement in tonic control of glycaemia, while metabolic outcomes correlate preferentially with improvement in phasic control. Assessment of basal phasic control may help in candidate profiling for DJBL implantation.


Subject(s)
Type 2 Diabetes Mellitus/surgery, Duodenum/surgery, Gastric Bypass/methods, Jejunum/surgery, Metabolic Syndrome/prevention & control, Morbid Obesity/surgery, Adult, Aged, Biomarkers/analysis, Blood Glucose/analysis, Type 2 Diabetes Mellitus/complications, Female, Follow-Up Studies, Glycated Hemoglobin/analysis, Humans, Male, Metabolic Syndrome/etiology, Middle Aged, Morbid Obesity/physiopathology, Prognosis, Weight Loss
10.
PLoS One ; 14(12): e0225817, 2019.
Article in English | MEDLINE | ID: mdl-31851681

ABSTRACT

Complexity analysis of glucose time series with Detrended Fluctuation Analysis (DFA) has proven useful for the prediction of type 2 diabetes mellitus (T2DM) development. We propose a modified DFA algorithm, review some of its characteristics, and compare it with other metrics derived from continuous glucose monitoring in this setting. Several issues of the DFA algorithm were evaluated: (1) Time windowing: the best predictive value was obtained including all time-windows from 15 minutes to 24 hours. (2) Influence of circadian rhythms: for 48-hour glucose recordings, the DFA alpha scaling exponent was calculated on 24-hour sliding segments (1-hour gap, 23-hour overlap), with a median coefficient of variation of 3.2%, which suggests that analysing time series of at least 24-hour length avoids the influence of circadian rhythms. (3) Influence of pretreatment of the time series through integration: DFA without integration was more sensitive to the introduction of white noise and it showed significant predictive power to forecast the development of T2DM, while the pretreated time series did not. (4) Robustness of an interpolation algorithm for missing values: the modified DFA algorithm evaluates the percentage of missing values in a time series. Establishing a 2% error threshold, we estimated the number and length of missing segments that could be admitted to consider a time series as suitable for DFA analysis. For comparison with other metrics, a Principal Component Analysis was performed, and the results neatly tease out four different components. The first vector carries information concerned with variability, the second represents mainly the DFA alpha exponent, while the third and fourth vectors carry essentially information related to the two "pre-diabetic behaviours" (impaired fasting glucose and impaired glucose tolerance). The scaling exponent obtained with the modified DFA algorithm proposed has significant predictive power for the development of T2DM in a high-risk population compared with other variability metrics or with the standard DFA algorithm.
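For reference, the standard DFA scaling-exponent computation can be sketched as below, with a flag mirroring the integration pretreatment discussed in point (3). The scale choices and non-overlapping windows are illustrative simplifications, not the modified algorithm of the paper.

```python
import numpy as np

def dfa_alpha(x, scales=(8, 16, 32, 64), integrate=True):
    """Sketch of Detrended Fluctuation Analysis: alpha is the slope of
    log F(s) versus log s, where F(s) is the RMS fluctuation of the
    linearly detrended profile over non-overlapping windows of size s.
    The integrate flag mirrors the pretreatment step discussed above."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean()) if integrate else x - x.mean()
    fluctuations = []
    for s in scales:
        t = np.arange(s)
        sq = []
        for w in range(len(y) // s):            # non-overlapping windows
            seg = y[w * s:(w + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            sq.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(sq)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return slope
```

On white noise this yields alpha near 0.5, while more persistent (random-walk-like) series yield larger exponents, which is the property the glucose analyses above exploit.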


Subject(s)
Blood Glucose/analysis, Type 2 Diabetes Mellitus/diagnosis, Adolescent, Adult, Aged, Aged 80 and Over, Algorithms, Datasets as Topic, Female, Humans, Male, Middle Aged, Risk Factors, Young Adult
11.
Math Biosci Eng ; 16(6): 6842-6857, 2019 07 26.
Article in English | MEDLINE | ID: mdl-31698591

ABSTRACT

Permutation Entropy (PE) is a very popular complexity analysis tool for time series. Despite its simplicity, it is very robust and yields good results in applications related to assessing the randomness of a sequence, or as a quantitative feature for signal classification. It is based on computing the Shannon entropy of the relative frequency of all the ordinal patterns found in a time series. However, there is a basic consensus on the fact that only analysing sample order and not amplitude might have a detrimental effect on the performance of PE. As a consequence, a number of methods based on PE have been proposed in recent years to include the possible influence of sample amplitude. These methods claim to outperform PE, but there is no general comparative analysis that confirms such claims independently. Furthermore, other statistics such as Sample Entropy (SampEn) are based solely on amplitude, and it could be argued that tools like this one are better suited to exploit amplitude differences than PE. The present study quantifies the performance of the standard PE method and other amplitude-included PE methods using a diversity of time series to find out if there are really significant performance differences. In addition, the study compares statistics based uniquely on ordinal or amplitude patterns. The objective was to ascertain whether the whole was more than the sum of its parts. The results confirmed that the highest classification accuracy was achieved using both types of patterns simultaneously, instead of using standard PE (ordinal patterns) or SampEn (amplitude patterns) in isolation.
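The standard PE computation described here (Shannon entropy of the relative frequencies of ordinal patterns) fits in a few lines; the function name and defaults are illustrative.

```python
from collections import Counter
from math import log

def permutation_entropy(x, m=3, tau=1):
    """Sketch of Permutation Entropy: Shannon entropy of the relative
    frequencies of the ordinal patterns of embedded subsequences of
    length m with delay tau."""
    n = len(x) - (m - 1) * tau
    patterns = Counter(tuple(sorted(range(m), key=lambda k: x[i + k * tau]))
                       for i in range(n))
    return -sum((c / n) * log(c / n) for c in patterns.values())
```

A monotone series produces a single ordinal pattern and therefore zero entropy, while an alternating series spreads probability over more patterns, which is the order-based sensitivity the amplitude-included variants try to complement.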

12.
Math Biosci Eng ; 17(1): 235-249, 2019 09 30.
Article in English | MEDLINE | ID: mdl-31731349

ABSTRACT

Fever is a common symptom of many diseases. Fever temporal patterns can differ depending on the specific pathology. Differentiation of diseases based on multiple mathematical features and visual observations has recently been studied in the scientific literature. However, the classification of diseases using a single mathematical feature has not been tried yet. The aim of the present study is to assess the feasibility of classifying diseases based on fever patterns using a single mathematical feature, specifically an entropy measure, Sample Entropy. This was an observational study. Analysis was carried out using 24-hour continuous tympanic temperature data from 103 patients. The Sample Entropy feature was extracted from the temperature data of the patients. Grouping of diseases (infectious, tuberculosis, non-tuberculosis, and dengue fever) was made based on physicians' diagnoses and laboratory findings. The quantitative results confirm the feasibility of the approach proposed, with an overall classification accuracy close to 70%, and the capability of finding significant differences for all the classes studied.
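As a reminder of the single feature used here, the following is a minimal textbook-style sketch of Sample Entropy; the brute-force O(n²) matching and the default tolerance are illustrative simplifications, not the study's implementation.

```python
from math import log

def sample_entropy(x, m=2, r=0.2):
    """Sketch of Sample Entropy: -log of the conditional probability
    that templates matching for m points (Chebyshev distance <= r,
    self-matches excluded) also match for m + 1 points."""
    n = len(x)
    def matches(mm):
        c = 0
        for i in range(n - m):          # same template count for m and m+1
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c
    b, a = matches(m), matches(m + 1)
    return -log(a / b) if a and b else float('inf')
```

Perfectly regular series score zero (every m-point match extends to m+1 points), and more irregular temperature traces score higher, which is the regularity contrast the classification relies on.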


Subject(s)
Computer-Assisted Diagnosis, Fever/diagnosis, Automated Pattern Recognition, Algorithms, Body Temperature, Communicable Diseases/diagnosis, Dengue/diagnosis, Feasibility Studies, Fever/classification, Humans, Theoretical Models, Reproducibility of Results, Computer-Assisted Signal Processing, Thermometers, Tuberculosis/diagnosis
13.
Math Biosci Eng ; 17(2): 1637-1658, 2019 12 10.
Article in English | MEDLINE | ID: mdl-32233600

ABSTRACT

Despite its widely demonstrated usefulness, there is still room for improvement in the basic Permutation Entropy (PE) algorithm, as several subsequent studies have proposed in recent years. For example, some improved PE variants try to address possible PE weaknesses, such as its focus only on ordinal information and not on amplitude, or the possible detrimental impact of equal values in subsequences due to motif ambiguity. Other evolved PE methods try to reduce the influence of input parameters. A good representative of this last point is the Bubble Entropy (BE) method. BE is based on sorting relations instead of ordinal patterns, and its promising capabilities have not been extensively assessed yet. The objective of the present study was to comparatively assess the classification performance of this new method, and to study and exploit the possible synergies between PE and BE. The claimed superior performance of BE over PE was first evaluated by conducting a series of time series classification tests over a varied and diverse experimental set. The results of this assessment suggested a complementary relationship between PE and BE, rather than a superior/inferior one. A second set of experiments, using PE and BE simultaneously as the input features of a clustering algorithm, demonstrated that with a proper algorithm configuration, classification accuracy and robustness can benefit from both measures.

14.
Entropy (Basel) ; 21(4)2019 Apr 10.
Article in English | MEDLINE | ID: mdl-33267099

ABSTRACT

Permutation Entropy (PE) is a time series complexity measure commonly used in a variety of contexts, with medicine being the prime example. In its general form, it requires three input parameters for its calculation: time series length N, embedded dimension m, and embedded delay τ. Inappropriate choices of these parameters may potentially lead to incorrect interpretations. However, there are no specific guidelines for an optimal selection of N, m, or τ, only general recommendations such as N ≫ m!, τ = 1, or m = 3, …, 7. This paper deals specifically with the study of the practical implications of N ≫ m!, since long time series are often not available, or non-stationary, and other preliminary results suggest that low N values do not necessarily invalidate PE usefulness. Our study analyses the PE variation as a function of the series length N and embedded dimension m in the context of a diverse experimental set, both synthetic (random, spikes, or logistic model time series) and real-world (climatology, seismic, financial, or biomedical time series), and the classification performance achieved with varying N and m. The results seem to indicate that shorter lengths than those suggested by N ≫ m! are sufficient for a stable PE calculation, and even very short time series can be robustly classified based on PE measurements before the stability point is reached. This may be due to the fact that there are forbidden patterns in chaotic time series, not all the patterns are equally informative, and differences among classes are already apparent at very short lengths.

15.
Comput Math Methods Med ; 2018: 1874651, 2018.
Article in English | MEDLINE | ID: mdl-30008796

ABSTRACT

Most cardiac arrhythmias can be classified as atrial flutter, focal atrial tachycardia, or atrial fibrillation. They have usually been treated using drugs, but catheter ablation has proven more effective. This is an invasive method devised to destroy the heart tissue that disturbs correct heart rhythm. In order to accurately localise the focus of this disturbance, the acquisition and processing of atrial electrograms form the usual mapping technique. They can be single potentials, double potentials, or complex fractionated atrial electrogram (CFAE) potentials, and the last ones are the most effective targets for ablation. The electrophysiological substrate is then localised by a suitable signal processing method. Sample Entropy (SampEn) is a statistic scarcely applied to electrograms, but it can arguably become a powerful tool to analyse these time series, supported by its results in other similar biomedical applications. However, its dependence on the perturbations usually found in electrogram data, such as missing samples or spikes, has not been analysed yet. This paper applied SampEn to the segmentation between non-CFAE and CFAE records and assessed its class segmentation power loss at different levels of these perturbations. The results confirmed that SampEn was able to significantly distinguish between non-CFAE and CFAE records, even under very unfavourable conditions, such as 50% of missing data or 10% of spikes.


Subject(s)
Atrial Fibrillation/diagnosis, Cardiac Electrophysiologic Techniques, Entropy, Cardiac Electrophysiology, Catheter Ablation, Humans
16.
Entropy (Basel) ; 20(11)2018 Nov 06.
Article in English | MEDLINE | ID: mdl-33266577

ABSTRACT

Many entropy-related methods for signal classification have been proposed and exploited successfully in the last several decades. However, it is sometimes difficult to find the optimal measure and the optimal parameter configuration for a specific purpose or context. Suboptimal settings may therefore produce subpar results and not even reach the desired level of significance. In order to increase the signal classification accuracy in these suboptimal situations, this paper proposes statistical models created with uncorrelated measures that exploit the possible synergies between them. The methods employed are permutation entropy (PE), approximate entropy (ApEn), and sample entropy (SampEn). Since PE is based on subpattern ordinal differences, whereas ApEn and SampEn are based on subpattern amplitude differences, we hypothesized that a combination of PE with another method would enhance the individual performance of any of them. The dataset was composed of body temperature records, for which we did not obtain a classification accuracy above 80% with a single measure, in this study or even in previous studies. The results confirmed that the classification accuracy rose up to 90% when combining PE and ApEn with a logistic model.

17.
Entropy (Basel) ; 20(11)2018 Nov 12.
Article in English | MEDLINE | ID: mdl-33266595

ABSTRACT

This paper analyses the performance of SampEn and one of its derivatives, Fuzzy Entropy (FuzzyEn), in the context of artifacted blood glucose time series classification. This is a difficult and practically unexplored framework, where the availability of more sensitive and reliable measures could be of great clinical impact. Although the advent of new blood glucose monitoring technologies may reduce the incidence of the problems stated above, incorrect device or sensor manipulation, patient adherence, sensor detachment, time constraints, adoption barriers or affordability can still result in relatively short and artifacted records, as the ones analyzed in this paper or in other similar works. This study is aimed at characterizing the changes induced by such artifacts, enabling the arrangement of countermeasures in advance when possible. Despite the presence of these disturbances, results demonstrate that SampEn and FuzzyEn are sufficiently robust to achieve a significant classification performance, using records obtained from patients with duodenal-jejunal exclusion. The classification results, in terms of area under the ROC of up to 0.9, with several tests yielding AUC values also greater than 0.8, and in terms of a leave-one-out average classification accuracy of 80%, confirm the potential of these measures in this context despite the presence of artifacts, with SampEn having slightly better performance than FuzzyEn.

18.
Comput Biol Med ; 87: 141-151, 2017 08 01.
Article in English | MEDLINE | ID: mdl-28595129

ABSTRACT

This paper evaluates the performance of first-generation entropy metrics, represented by the well-known and widely used Approximate Entropy (ApEn) and Sample Entropy (SampEn) metrics, and what can be considered an evolution of these, Fuzzy Entropy (FuzzyEn), in the electroencephalogram (EEG) signal classification context. The study uses the commonest artifacts found in real EEGs, such as white noise, and muscular, cardiac, and ocular artifacts. Using two different sets of publicly available EEG records, and a realistic range of amplitudes for interfering artifacts, this work optimises and assesses the robustness of these metrics against artifacts in terms of class segmentation probability. The results show that the qualitative behaviour of the two datasets is similar, with SampEn and FuzzyEn performing best, and that noise and muscular artifacts are the most confounding factors. In contrast, there is wide variability with regard to initialization parameters. The poor performance achieved by ApEn suggests that this metric should not be used in these contexts.


Subject(s)
Electroencephalography/methods, Entropy, Artifacts, Fuzzy Logic, Humans, Computer-Assisted Signal Processing
19.
J Crit Care ; 37: 136-140, 2017 02.
Article in English | MEDLINE | ID: mdl-27721181

ABSTRACT

Body temperature monitoring provides health carers with key clinical information about the physiological status of patients. Temperature readings are taken periodically to detect febrile episodes and consequently implement the appropriate medical countermeasures. However, fever is often difficult to assess at early stages, or remains undetected until the next reading, probably a few hours later. The objective of this article is to develop a statistical model to forecast fever before a temperature threshold is exceeded to improve the therapeutic approach to the subjects involved. To this end, temperature series of 9 patients admitted to a general internal medicine ward were obtained with a continuous monitoring Holter device, collecting measurements of peripheral and core temperature once per minute. These series were used to develop different statistical models that could quantify the probability of having a fever spike in the following 60 minutes. A validation series was collected to assess the accuracy of the models. Finally, the results were compared with the analysis of some series by experienced clinicians. Two different models were developed: a logistic regression model and a linear discrimination analysis model. Both of them exhibited a fever peak forecasting accuracy greater than 84%. When compared with experts' assessment, both models identified 35 (97.2%) of 36 fever spikes. The models proposed are highly accurate in forecasting the appearance of fever spikes within a short period in patients with suspected or confirmed febrile-related illnesses.


Subject(s)
Body Temperature/physiology, Fever/diagnosis, Statistical Models, Critical Care, Critical Illness, Forecasting, Humans, Logistic Models, Reproducibility of Results
20.
Nonlinear Dynamics Psychol Life Sci ; 19(4): 419-36, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26375934

ABSTRACT

Many physiological systems are paradigmatic examples of complex networks, displaying behaviors best studied by means of tools derived from nonlinear dynamics and fractal geometry. Furthermore, while conventional wisdom considers health as an 'orderly' situation (and diseases are often called 'disorders'), the truth is that health is characterized by a remarkable (pseudo)-randomness, and the loss of this pseudo-randomness (i.e., the 'decomplexification' of the system's output) is one of the earliest signs of the system's dysfunction. The potential clinical uses of this information are evident. However, the instruments used to assess complexity are still under debate, and these tools are just beginning to find their place at the bedside. We present a brief overview of the potential uses of complexity analysis in several areas of clinical medicine. We comment on the metrics most frequently used, and we specifically review their application to certain neurologic diseases, aging, diabetes, febrile diseases and the critically ill patient.


Subject(s)
Aging/physiology, Diabetes Mellitus/physiopathology, Entropy, Fever/physiopathology, Nervous System Diseases/physiopathology, Nonlinear Dynamics, Critical Illness, Fractals, Humans